Sign language

A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns (manual communication, body language) to convey meaning, simultaneously combining hand shapes, the orientation and movement of the hands, arms or body, and facial expressions to express a signer's thoughts fluidly.

Wherever communities of deaf people exist, sign languages develop. Their complex spatial grammars are markedly different from the grammars of spoken languages.[1][2] Hundreds of sign languages are in use around the world and are at the cores of local deaf cultures. Some sign languages have obtained some form of legal recognition, while others have no status at all.

History

One of the earliest written records of a signed language occurred in the fifth century BC, in Plato's Cratylus, where Socrates says: "If we hadn't a voice or a tongue, and wanted to express things to one another, wouldn't we try to make signs by moving our hands, head, and the rest of our body, just as dumb people do at present?"[3] It seems that groups of deaf people have used signed languages throughout history.

In 2nd-century Judea, the Mishnah tractate Gittin[4] stipulated that for the purpose of commercial transactions "A deaf-mute can hold a conversation by means of gestures. Ben Bathyra says that he may also do so by means of lip-motions." This teaching was well known in Jewish society, where study of the Mishnah was compulsory from childhood.

In 1620, Juan Pablo Bonet published Reducción de las letras y arte para enseñar a hablar a los mudos (‘Reduction of letters and art for teaching mute people to speak’) in Madrid.[5] It is considered the first modern treatise on phonetics and logopedics (speech therapy), setting out a method of oral education for deaf people by means of manual signs, in the form of a manual alphabet to improve the communication of mute or deaf people.

Building on Bonet's language of signs, Charles-Michel de l'Épée published his manual alphabet in the 18th century; it has survived essentially unchanged in France and North America to the present day.

Sign languages have often evolved around schools for deaf students. In 1755, Abbé de l'Épée founded the first school for deaf children in Paris; Laurent Clerc was arguably its most famous graduate. Clerc went to the United States with Thomas Hopkins Gallaudet to found the American School for the Deaf in Hartford, Connecticut, in 1817.[6] Gallaudet's son, Edward Miner Gallaudet, founded a school for the deaf in 1857 in Washington, D.C., which in 1864 became the National Deaf-Mute College. Now called Gallaudet University, it is still the only liberal arts university for deaf people in the world.

In popular thought, each spoken language has a sign language counterpart. This is true in a sense, inasmuch as a linguistic population generally contains Deaf members who often generate a sign language. Much as geographical or cultural forces isolate populations and lead to the generation of different and distinct spoken languages, the same forces operate on signed languages, so they tend to maintain their identities through time in roughly the same areas of influence as the local spoken languages. This occurs even though sign languages generally have no linguistic relation to the spoken languages of the lands in which they arise. In fact, the correlation between signed and spoken languages is much more complex than commonly thought, and because of the geographic influences just mentioned, it varies by country more than by spoken language.

For example, the US, Canada, the UK, Australia and New Zealand all have English as their dominant language, but American Sign Language (ASL), used in the US and most parts of Canada, is derived from French Sign Language, whereas the other three countries sign dialects of British, Australian and New Zealand Sign Language.[7] Similarly, the sign languages of Spain and Mexico are very different, despite Spanish being the national language in each country,[8] and the sign language used in Bolivia is based on ASL rather than on any sign language used in a Spanish-speaking country.[9] Variations also arise within a 'national' sign language that do not necessarily correspond to dialect differences in the national spoken language; rather, they can usually be correlated with the geographic location of residential schools for the deaf.[10][11]

International Sign, formerly known as Gestuno, is used mainly at international Deaf events such as the Deaflympics and meetings of the World Federation of the Deaf. Recent studies conclude that while International Sign is a kind of pidgin, it is more complex than a typical pidgin and indeed more like a full signed language.[12]

Linguistics of sign

In linguistic terms, sign languages are as rich and complex as any oral language, despite the common misconception that they are not "real languages". Professional linguists have studied many sign languages and found that they exhibit the fundamental properties that exist in all languages.[13][14]

Sign languages are not mime – in other words, signs are conventional, often arbitrary and do not necessarily have a visual relationship to their referent, much as most spoken language is not onomatopoeic. While iconicity is more systematic and widespread in sign languages than in spoken ones, the difference is not categorical.[15] The visual modality allows the human preference for close connections between form and meaning, present but suppressed in spoken languages, to be more fully expressed.[16] This does not mean that signed languages are a visual rendition of an oral language. They have complex grammars of their own, and can be used to discuss any topic, from the simple and concrete to the lofty and abstract.

Sign languages, like oral languages, organize elementary, meaningless units (phonemes; once called cheremes in the case of sign languages) into meaningful semantic units. As in spoken languages, these meaningless units are represented as (combinations of) features, although coarser distinctions are also often made in terms of Handshape (or Handform), Orientation, Location (or Place of Articulation), Movement, and Non-manual expression.
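
To make this parameter decomposition concrete, here is a minimal sketch in Python; the class, field names and example values are invented for illustration and do not follow any standard transcription system:

    # A toy record of the five commonly cited sign parameters.
    # All labels are illustrative, not standard notation.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Sign:
        handshape: str    # e.g. "flat-B", "fist-S"
        orientation: str  # palm orientation, e.g. "palm-down"
        location: str     # place of articulation, e.g. "chin"
        movement: str     # e.g. "arc-forward"
        non_manual: str   # e.g. "raised-brows", or "none"

    # Two hypothetical signs differing in a single parameter form a
    # minimal pair, much as /bat/ and /pat/ do in a spoken language.
    sign_a = Sign("flat-B", "palm-down", "chin", "arc-forward", "none")
    sign_b = Sign("flat-B", "palm-down", "forehead", "arc-forward", "none")
    assert sign_a != sign_b  # they contrast only in location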

Common linguistic features of many sign languages are the occurrence of classifiers, a high degree of inflection, and a topic-comment syntax. More than spoken languages, sign languages can convey meaning by simultaneous means, e.g. by the use of space, two manual articulators, and the signer's face and body. Though there is still much discussion of iconicity in signed languages, classifiers are generally perceived to be highly iconic, as these complex constructions "function as predicates that may express any or all of the following: motion, position, stative-descriptive, or handling information".[17] Note that the term classifier is not used by everyone working on these constructions; across the field of sign language linguistics, the same constructions are also referred to by other terms.

Sign languages' relationships with oral languages

A common misconception is that sign languages are somehow dependent on oral languages, that is, that they are oral language spelled out in gesture, or that they were invented by hearing people. Hearing teachers in deaf schools, such as Thomas Hopkins Gallaudet, are often incorrectly referred to as “inventors” of sign language.

Although not part of sign languages, elements from manual alphabets (fingerspelling) may be used in signed communication, mostly for proper names and for concepts for which no sign is available at that moment. Elements from the manual alphabet can sometimes be a source of new signs (e.g. initialized signs, in which the shape of the hand represents the first letter of the word for the sign).
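
As a rough sketch of how fingerspelling works, the following Python snippet spells a written name as a sequence of handshape labels; the labels themselves are invented placeholders, not a real transcription:

    # Toy fingerspelling: map each letter of a written word to a
    # manual-alphabet handshape label (labels are placeholders).
    MANUAL_ALPHABET = {ch: "handshape-" + ch.upper()
                       for ch in "abcdefghijklmnopqrstuvwxyz"}

    def fingerspell(word: str) -> list:
        """Return the handshape sequence for a written word."""
        return [MANUAL_ALPHABET[ch] for ch in word.lower() if ch in MANUAL_ALPHABET]

    print(fingerspell("Ada"))  # ['handshape-A', 'handshape-D', 'handshape-A']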

On the whole, sign languages are independent of oral languages and follow their own paths of development. For example, British Sign Language and American Sign Language are quite different and mutually unintelligible, even though the hearing people of Britain and America share the same oral language. The grammars of sign languages do not usually resemble those of the spoken languages used in the same geographical area; in fact, in terms of syntax, ASL shares more with spoken Japanese than it does with English.[18]

Similarly, countries which use a single oral language throughout may have two or more sign languages, whereas an area that contains more than one oral language might use only one sign language. South Africa, which has 11 official oral languages and a similar number of other widely used oral languages, is a good example of this. It has only one sign language with two variants, due to its history of having two major educational institutions for the deaf which have served different geographic areas of the country.

Spatial grammar and simultaneity

Sign languages exploit the unique features of the visual medium (sight), but may also exploit tactile features (tactile sign languages). Oral language is by and large linear; only one sound can be made or received at a time. Sign language, on the other hand, is visual and, hence, can use simultaneous expression, although this is limited articulatorily and linguistically. Visual perception allows processing of simultaneous information.

One way in which many signed languages take advantage of the spatial nature of the language is through the use of classifiers. Classifiers allow a signer to spatially show a referent's type, size, shape, movement, or extent.
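
The mechanism can be sketched as follows; the vehicle and person classifiers loosely follow well-known ASL examples, but the function and all labels here are invented for illustration:

    # Toy classifier predicate: the handshape names a class of referent,
    # while its placement and movement through signing space depict the
    # referent's location and path, all within one construction.
    CLASSIFIER_HANDSHAPES = {"vehicle": "3-handshape", "person": "1-handshape"}

    def classifier_predicate(referent_class: str, path: list) -> dict:
        return {
            "handshape": CLASSIFIER_HANDSHAPES[referent_class],
            "path_through_space": path,  # (x, y) points in signing space
        }

    # "The car winds uphill": class, motion and path expressed at once.
    print(classifier_predicate("vehicle", [(0.0, 0.0), (0.3, 0.2), (0.6, 0.5)]))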

However, the emphasis on the possibilities of simultaneity in signed languages, in contrast to spoken languages, is sometimes exaggerated. The use of two manual articulators is subject to motor constraints, resulting in a large degree of symmetry[19] or signing with one articulator only.

Non-manual signs

Signed languages convey much of their prosody through non-manual signs. Postures or movements of the body, head, eyebrows, eyes, cheeks, and mouth are used in various combinations to show several categories of information, including lexical distinction, grammatical structure, adjectival or adverbial content, and discourse functions.

In ASL, some signs have required facial components that distinguish them from other signs. An example of this sort of lexical distinction is the sign translated 'not yet', which requires that the tongue touch the lower lip and that the head rotate from side to side, in addition to the manual part of the sign. Without these features it would be interpreted as 'late'.[20]

Grammatical structure that is shown through non-manual signs includes questions, negation, relative clauses,[21] boundaries between sentences,[22] and the argument structure of some verbs.[23] ASL and BSL use similar non-manual marking for yes/no questions, for example. They are shown through raised eyebrows and a forward head tilt.[24][25]
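
As an informal sketch, the yes/no question marking described above can be modelled as a non-manual layer spanning the whole clause; the gloss notation is invented for this example:

    # Toy non-manual marking: the manual glosses stay unchanged, while
    # raised brows and a forward head tilt co-occur over the clause.
    def mark_yes_no_question(glosses: list) -> list:
        non_manual = "brows-raised+head-forward"
        return [g + "[" + non_manual + "]" for g in glosses]

    # YOU DEAF -> a yes/no question purely through non-manual marking.
    print(mark_yes_no_question(["YOU", "DEAF"]))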

Some adjectival and adverbial information is conveyed through non-manual signs, but what these signs are varies from language to language. For instance, in ASL a slightly open mouth with the tongue relaxed and visible in the corner of the mouth means 'carelessly,' but a similar sign in BSL means 'boring' or 'unpleasant.'[25]

Discourse functions such as turn taking are largely regulated through head movement and eye gaze. Since the addressee in a signed conversation must be watching the signer, a signer can avoid letting the other person have a turn by not looking at them, or can indicate that the other person may have a turn by making eye contact.[26]

Iconicity in signed languages

The first studies on iconicity in ASL were published in the late 1970s and early 1980s. Many early sign language linguists rejected the notion that iconicity was an important aspect of the language.[27][28] Though they recognized that certain aspects of the language seemed iconic, they considered this to be merely extralinguistic, a property which did not influence the language. Frishberg (1975) wrote a very influential paper addressing the relationship between arbitrariness and iconicity in ASL. She concluded that though originally present in many signs, iconicity is degraded over time through the application of grammatical processes. In other words, over time, the natural processes of regularization in the language obscure any iconically motivated features of the sign.

Some researchers did dare to suggest that the properties of ASL gave it a clear advantage in terms of learning and memory.[29] Brown, a psychologist by training, was one of the first to document this benefit. In his study, Brown found that when children were taught signs that had high levels of iconic mapping, they were significantly more likely to recall them in a later memory task than signs that had little or no iconic properties.

The pioneers of sign language linguistics were saddled with the task of trying to prove that ASL was a real language and not merely a collection of gestures or “English on the hands.” One of the prevailing beliefs at the time was that ‘real languages’ must consist of an arbitrary relationship between form and meaning. Thus, if ASL consisted of signs that had an iconic form-meaning relationship, it could not be considered a real language. As a result, iconicity as a whole was largely neglected in research on signed languages.

The cognitive linguistics perspective rejects the more traditional definition of iconicity as a relationship between linguistic form and a concrete, real-world referent. Rather, it treats iconicity as a set of selected correspondences between the form and the meaning of a sign.[30] In this view, iconicity is grounded in a language user’s mental representation (“construal” in Cognitive Grammar). It is a fully grammatical and central aspect of a signed language rather than a peripheral phenomenon.[31]

The cognitive linguistics perspective allows for some signs to be fully iconic or partially iconic given the number of correspondences between the possible parameters of form and meaning.[32] In this way, the Israeli Sign Language (ISL) sign for ASK has parts of its form that are iconic (“movement away from the mouth” means “something coming from the mouth”), and parts that are arbitrary (the handshape, and the orientation).[33]

Many signs have metaphoric mappings as well as iconic or metonymic ones. For these signs there are three-way correspondences between a form, a concrete source, and an abstract target meaning. The ASL sign LEARN has this three-way correspondence. The abstract target meaning is “learning.” The concrete source is putting objects into the head from books. The form is a grasping hand moving from an open palm to the forehead. The iconic correspondence is between form and concrete source. The metaphorical correspondence is between concrete source and abstract target meaning. Because the concrete source is connected to two correspondences, linguists refer to metaphorical signs as “double mapped.”[30][32][33]
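
The three-way correspondence can be written out as two chained mappings, informally, using the LEARN example just described:

    # The double mapping of ASL LEARN, as two chained correspondences
    # (an informal paraphrase of the analysis cited above).
    learn = {
        "form": "grasping hand moves from open palm to forehead",
        "concrete_source": "putting an object into the head",
        "abstract_target": "learning",
    }
    iconic = (learn["form"], learn["concrete_source"])                   # form -> source
    metaphorical = (learn["concrete_source"], learn["abstract_target"])  # source -> target
    # The concrete source takes part in both mappings, hence "double mapped".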

Classification of sign languages

Although sign languages have emerged naturally in deaf communities alongside or among spoken languages, they are unrelated to spoken languages and have different grammatical structures at their core.

There has been very little historical linguistic research on sign languages, apart from a few comparisons of lexical data of related sign languages. Sign language typology is still in its infancy, since extensive knowledge about sign language grammars is still scarce. Although various cross-linguistic studies have been undertaken, it is difficult to use these for typological purposes. Sign languages may spread through migration, through the establishment of deaf schools (often by foreign-trained educators), or due to political domination.

Language contact and creolization are common, making clear family classifications difficult – it is often unclear whether lexical similarity is due to borrowing or to a common parent language, or whether there were one or several parent languages. Contact occurs between sign languages, between signed and spoken languages (contact sign), and between sign languages and gestural systems used by the broader community. One author has speculated that Adamorobe Sign Language may be related to the "gestural trade jargon used in the markets throughout West Africa", in vocabulary and areal features including prosody and phonetics.[34]

The only comprehensive classification along these lines going beyond a simple listing of languages dates back to 1991.[37] The classification is based on the 69 sign languages from the 1988 edition of Ethnologue that were known at the time of the 1989 conference on sign languages in Montreal and 11 more languages the author added after the conference.[38]

Wittmann classification of sign languages

                       Primary    Primary   Auxiliary   Auxiliary
                       language   group     language    group
    Prototype-A[39]        5          1         7           2
    Prototype-R[40]       18          1         1           –
    BSL-derived            8          –         –           –
    DGS-derived         1 or 2        –         –           –
    JSL-derived            2          –         –           –
    LSF-derived           30          –         –           –
    LSG-derived           1?          –         –           –
In his classification, the author distinguishes between primary and auxiliary sign languages[41] as well as between single languages and names that are thought to refer to more than one language.[42] The prototype-A class of languages includes all those sign languages that seemingly cannot be derived from any other language.[39] Prototype-R languages are languages that are remotely modelled on a prototype-A language (in many cases thought to have been FSL) by a process Kroeber (1940) called "stimulus diffusion".[40] The families of BSL, DGS, JSL, LSF (and possibly LSG) were the products of creolization and relexification of prototype languages.[43] Creolization is seen as enriching overt morphology in sign languages, as compared to reducing overt morphology in vocal languages.[44]

Typology of sign languages

Linguistic typology (going back to Edward Sapir) is based on word structure and distinguishes morphological classes such as agglutinating/concatenating, inflectional, polysynthetic, incorporating, and isolating.

Sign languages vary in syntactic typology, as different languages use different word orders. For example, Austrian Sign Language (ÖGS) is subject-object-verb while ASL is object-subject-verb. Correspondence with the surrounding spoken languages is not improbable.
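
A toy linearization makes the word-order contrast concrete; the glosses are invented, and the orders follow the description above:

    # Linearize the same proposition under different basic word orders.
    def linearize(order: str, s: str, o: str, v: str) -> list:
        slots = {"S": s, "O": o, "V": v}
        return [slots[c] for c in order]

    print(linearize("SOV", "CHILD", "APPLE", "EAT"))  # ÖGS-style: CHILD APPLE EAT
    print(linearize("OSV", "CHILD", "APPLE", "EAT"))  # ASL-style: APPLE CHILD EAT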

Brentari[45][46] classifies sign languages as a whole group, determined by their medium of communication (visual instead of auditory), as monosyllabic and polymorphemic. That means that one syllable (i.e. one word, one sign) can express several morphemes, as when the subject and object of a verb determine the direction of the verb's movement (inflection).
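
A minimal sketch of how one sign can bundle several morphemes, using a hypothetical directional verb whose movement runs from the subject's locus to the object's locus (the loci names are invented):

    # Toy directional ("agreeing") verb: a single sign encodes the verb
    # plus subject and object agreement via its movement path.
    LOCI = {"I": "near-signer", "YOU": "near-addressee", "SHE": "right-of-signer"}

    def inflect_directional_verb(verb: str, subject: str, obj: str) -> dict:
        return {
            "gloss": verb,
            "movement_from": LOCI[subject],  # agreement with the subject
            "movement_to": LOCI[obj],        # agreement with the object
        }

    # One monosyllabic sign, several morphemes: GIVE + subject + object.
    print(inflect_directional_verb("GIVE", "I", "SHE"))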

Acquisition of signed languages

Children who are exposed to a signed language from birth will acquire it, just as hearing children acquire their native spoken language.[47]

The acquisition of non-manual features follows an interesting pattern: when a word that always has a particular non-manual feature associated with it (such as a wh-question word) is learned, the non-manual aspects are attached to the word but do not have the flexibility associated with adult use. At a certain point the non-manual features are dropped and the word is produced with no facial expression. After a few months the non-manuals reappear, this time being used the way adult signers would use them.[48]

Written forms of sign languages

Sign languages do not have a traditional or formal written form. Many Deaf people do not see a need to write their own language.[49]

Several systems for representing sign languages in written form have been developed.

So far, none of these notation systems has achieved formal or widespread acceptance.

Sign perception

For a native signer, sign perception influences how the mind makes sense of visual language experience. For example, a handshape may vary based on the signs that come before or after it, but these variations are arranged into perceptual categories during the development of the language user. The mind detects handshape contrasts but groups similar handshapes together in one category,[50][51][52] while different handshapes are stored in separate categories. The mind ignores some of the similarities between different perceptual categories, at the same time preserving the visual information within each perceptual category of handshape variation.
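
The grouping of variable handshapes into discrete categories can be illustrated with a nearest-prototype sketch; the one-dimensional "finger spread" feature and the category names are invented for this example:

    # Toy categorical perception: variable handshape tokens are mapped to
    # the nearest stored prototype, discarding within-category variation.
    PROTOTYPES = {"fist-S": 0.0, "flat-B": 0.5, "spread-5": 1.0}

    def categorize(spread: float) -> str:
        return min(PROTOTYPES, key=lambda cat: abs(PROTOTYPES[cat] - spread))

    # Slightly different productions fall into the same category...
    assert categorize(0.45) == categorize(0.55) == "flat-B"
    # ...while a larger difference crosses a category boundary.
    assert categorize(0.9) == "spread-5"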

Sign languages in society

Telecommunications facilitated signing

One of the first demonstrations of the ability of telecommunications to help sign language users communicate with each other occurred when AT&T's videophone (trademarked as the "Picturephone") was introduced to the public at the 1964 New York World's Fair – two deaf users were able to freely communicate with each other between the fair and another city.[53] Various organizations have also conducted research on signing via videotelephony.

Sign language interpretation services via Video Remote Interpreting (VRI) or a Video Relay Service (VRS) are useful when one of the parties is deaf, hard-of-hearing or speech-impaired (mute) and the other is hearing. In VRI, a sign language user and a hearing person are in one location, and the interpreter is in another (rather than being in the same room with the clients as would normally be the case). The interpreter communicates with the sign language user via a video telecommunications link, and with the hearing person by an audio link. In VRS, the sign language user, the interpreter, and the hearing person are in three separate locations, thus allowing the two clients to talk to each other on the phone through the interpreter.
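
The difference between the two configurations is essentially one of topology, as this small sketch summarizes (the site labels are arbitrary):

    # VRI: both clients share one site; the interpreter is remote.
    # VRS: all three participants are at separate sites.
    CONFIGURATIONS = {
        "VRI": {"deaf_client": "site A", "hearing_client": "site A", "interpreter": "site B"},
        "VRS": {"deaf_client": "site A", "hearing_client": "site B", "interpreter": "site C"},
    }

    for name, sites in CONFIGURATIONS.items():
        print(name, "involves", len(set(sites.values())), "separate sites")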

In such cases the interpretation flow is normally between a signed language and an oral language that are customarily used in the same country, such as French Sign Language (FSL) to spoken French, Spanish Sign Language (SSL) to spoken Spanish, British Sign Language (BSL) to spoken English, and American Sign Language (ASL) also to spoken English (since BSL and ASL are completely distinct), and so on. Multilingual sign language interpreters, who can also translate across principal languages (such as to and from SSL, to and from spoken English), are available as well, albeit less frequently. Such activities involve considerable effort on the part of the interpreter, since sign languages are distinct natural languages with their own construction and syntax, different from the oral language used in the same country.

With video interpreting, sign language interpreters work remotely with live video and audio feeds, so that the interpreter can see the deaf party, and converse with the hearing party, and vice versa. Much like telephone interpreting, video interpreting can be used for situations in which no on-site interpreters are available. However, video interpreting cannot be used for situations in which all parties are speaking via telephone alone. VRI and VRS interpretation requires all parties to have the necessary equipment. Some advanced equipment enables interpreters to remotely control the video camera, in order to zoom in and out or to point the camera toward the party that is signing.

Home sign

Sign systems are sometimes developed within a single family. For instance, when hearing parents with no sign language skills have a deaf child, an informal system of signs will naturally develop, unless repressed by the parents. The term for these mini-languages is home sign (sometimes homesign or kitchen sign).[54]

Home sign arises due to the absence of any other way to communicate. Within the span of a single lifetime and without the support or feedback of a community, the child naturally invents signals to meet his or her communication needs. Although this kind of system is grossly inadequate for the intellectual development of a child and comes nowhere near meeting the standards linguists use to describe a complete language, it is a common occurrence. No type of home sign is recognized as an official language.

Use of signs in hearing communities

Gesture is a typical component of spoken languages. More elaborate systems of manual communication have developed in places or situations where speech is not practical or permitted, such as cloistered religious communities, scuba diving, television recording studios, loud workplaces, stock exchanges, baseball, hunting (by groups such as the Kalahari bushmen), or in the game Charades.

In rugby union, the referee uses a limited but defined set of signs to communicate decisions to the spectators.

Military and police forces also use silent hand and arm signals to signal instructions/observations in combat situations.[55]

Recently, there has been a movement to teach and encourage the use of sign language with toddlers before they learn to talk, because such young children can communicate effectively with signed languages well before they are physically capable of speech. This is typically referred to as Baby Sign. There is also a movement to use signed languages more with non-deaf and non-hard-of-hearing children with other causes of speech impairment or delay, for the benefit of effective communication without dependence on speech.

On occasion, where the prevalence of deaf people is high enough, a deaf sign language has been taken up by an entire local community. Famous examples of this include Martha's Vineyard Sign Language in the USA, Kata Kolok in a village in Bali, Adamorobe Sign Language in Ghana and Yucatec Maya sign language in Mexico. In such communities deaf people are not socially disadvantaged.

Many Australian Aboriginal sign languages arose in a context of extensive speech taboos, such as during mourning and initiation rites. They are or were especially highly developed among the Warlpiri, Warumungu, Dieri, Kaytetye, Arrernte, and Warlmanpa, and are based on their respective spoken languages.

A pidgin sign language arose among tribes of American Indians in the Great Plains region of North America (see Plains Indian Sign Language). It was used to communicate among tribes with different spoken languages. Today it is used especially among the Crow, Cheyenne, and Arapaho. Unlike other sign languages developed by hearing people, it shares the spatial grammar of deaf sign languages.

Gestural theory of human language origins

The gestural theory states that vocal human language developed from a gestural sign language.[56] An important question for gestural theory is what caused the shift to vocalization.[57]

Primate use of sign language

There have been several notable examples of scientists teaching non-human primates basic signs in order to communicate with humans.[58]

Deaf communities and deaf culture

Deaf communities are widespread around the world, and the cultures that have developed within them are rich. Sometimes deaf culture does not even intersect with hearing culture, because of the various impediments deaf and hard-of-hearing people face in perceiving auditory information.

Legal recognition

Some sign languages have obtained some form of legal recognition, while others have no status at all.

References

  1. ^ Stokoe, William C. (1976). Dictionary of American Sign Language on Linguistic Principles. Linstok Press. ISBN 0-932130-01-1.
  2. ^ Stokoe, William C. (1960). Sign language structure: An outline of the visual communication systems of the American deaf. Studies in linguistics: Occasional papers (No. 8). Buffalo: Dept. of Anthropology and Linguistics, University of Buffalo.
  3. ^ Bauman, Dirksen (2008). Open your eyes: Deaf studies talking. University of Minnesota Press. ISBN 0816646198. 
  4. ^ Babylonian Talmud Gittin folio 59a
  5. ^ Pablo Bonet, J. de (1620). Reducción de las letras y arte para enseñar a hablar a los mudos. Ed. Abarca de Angulo, Madrid. A facsimile scan of the book (in Spanish) is available online, held at the University of Sevilla, Spain.
  6. ^ Canlas (2006).
  7. ^ http://www.ethnologue.com/show_language.asp?code=bfi
  8. ^ http://www.sil.org/silesr/abstract.asp?ref=2007-008; Steve and Dianne Parkhurst, personal communication
  9. ^ http://www.sil.org/silesr/abstract.asp?ref=2009-002
  10. ^ Lucas, Ceil, Robert Bayley and Clayton Valli. 2001. Sociolinguistic Variation in American Sign Language. Washington, DC: Gallaudet University Press.
  11. ^ Lucas, Ceil, Bayley, Robert, Clayton Valli. 2003. What's Your Sign for PIZZA? An Introduction to Variation in American Sign Language. Washington, DC: Gallaudet University Press.
  12. ^ Cf. Supalla, Ted & Rebecca Webb (1995). "The grammar of international sign: A new look at pidgin languages." In: Emmorey, Karen & Judy Reilly (eds). Language, gesture, and space. (International Conference on Theoretical Issues in Sign Language Research) Hillsdale, N.J.: Erlbaum, pp. 333–352; McKee R. & J. Napier J. (2002). "Interpreting in International Sign Pidgin: an analysis." Journal of Sign Language Linguistics 5(1).
  13. ^ Klima, Edward S.; & Bellugi, Ursula. (1979). The signs of language. Cambridge, MA: Harvard University Press. ISBN 0-674-80795-2.
  14. ^ Sandler, Wendy; & Lillo-Martin, Diane. (2006). Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
  15. ^ Johnston (1989).
  16. ^ Taub (2001).
  17. ^ Emmorey, K. (2002). Language, cognition and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum Associates.
  18. ^ Nakamura (1995).
  19. ^ Battison, Robbin (1978). Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
  20. ^ Liddell, Scott K. (2003). Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
  21. ^ Boudreault, Patrick; Mayberry, Rachel I. (2006). "Grammatical processing in American Sign Language: Age of first-language acquisition effects in relation to syntactic structure". Language and Cognitive Processes 21: 608–635. doi:10.1080/01690960500139363. 
  22. ^ Fenlon, Jordan; Denmark, Tanya; Campbell, Ruth; Woll, Bencie (2008). "Seeing sentence boundaries". Sign Language & Linguistics 10 (2): 177–200.
  23. ^ Thompson, Robin; Emmorey, Karen; Kluender, Robert (2006). "The Relationship between Eye Gaze and Verb Agreement in American Sign Language: An Eye-tracking Study". Natural Language & Linguistic Theory 24: 571–604. doi:10.1007/s11049-005-1829-y. 
  24. ^ Baker, Charlotte, and Dennis Cokely (1980). American Sign Language: A teacher's resource text on grammar and culture. Silver Spring, MD: T.J. Publishers.
  25. ^ a b Sutton-Spence, Rachel, and Bencie Woll (1998). The linguistics of British Sign Language. Cambridge: Cambridge University Press.
  26. ^ Baker, Charlotte (1977). Regulators and turn-taking in American Sign Language discourse, in Lynn Friedman, On the other hand: New perspectives on American Sign Language. New York: Academic Press
  27. ^ Frishberg (1975)
  28. ^ Klima & Bellugi (1979)
  29. ^ Brown 1980
  30. ^ a b Taub (2001)
  31. ^ Wilcox (2004)
  32. ^ a b Wilcox (2000)
  33. ^ a b Meir (2010)
  34. ^ Frishberg (1987). See also the classification of Wittmann (1991) for the general issue of jargons as prototypes in sign language genesis.
  35. ^ See Gordon (2008), under nsr and sfs.
  36. ^ Fischer, Susan D. et al. (2010). "Variation in East Asian Sign Language Structures" in Sign Languages, p. 499. at Google Books
  37. ^ Henri Wittmann (1991). The classification is said to be typological satisfying Jakobson's condition of genetic interpretability.
  38. ^ Wittmann's classification went into Ethnologue's database, where it is still cited. The subsequent edition of Ethnologue in 1992 went up to 81 sign languages, ultimately adopting Wittmann's distinction between primary and alternate sign languages (going back ultimately to Stokoe 1974) and, more vaguely, some of his other traits. The 2008 version of the 15th edition of Ethnologue lists 124 sign languages.
  39. ^ a b These are Adamorobe Sign Language, Armenian Sign Language, Australian Aboriginal sign languages, Hindu mudra, the Monastic sign languages, Martha's Vineyard Sign Language, Plains Indian Sign Language, Urubú-Kaapor Sign Language, Chinese Sign Language, Indo-Pakistani Sign Language (Pakistani SL is said to be R, but Indian SL to be A, though they are the same language), Japanese Sign Language, and maybe the various Thai Hill-Country sign languages, French Sign Language, Lyons Sign Language, and Nohya Maya Sign Language. Wittmann also includes, bizarrely, Chinese characters and Egyptian hieroglyphs.
  40. ^ a b These are Providencia Island, Kod Tangan Bahasa Malaysia (manually signed Malay), German, Ecuadoran, Salvadoran, Gestuno, Indo-Pakistani (Pakistani SL is said to be R, but Indian SL to be A, though they are the same language), Kenyan, Brazilian, Spanish, Nepali (with possible admixture), Penang, Rennellese, Saudi, the various Sri Lankan sign languages, and perhaps BSL, Peruvian, Tijuana, Venezuelan, and Nicaraguan sign languages.
  41. ^ Wittmann adds that this taxonomic criterion is not really applicable with any scientific rigor: Auxiliary sign languages, to the extent that they are full-fledged natural languages (and therefore included in his survey) at all, are mostly used by the deaf as well, and some primary sign languages (such as ASL and Adamorobe Sign Language) have acquired auxiliary usages.
  42. ^ Wittmann includes in this class Australian Aboriginal sign languages (at least 14 different languages), Monastic sign language, Thai Hill-Country sign languages (possibly including languages in Vietnam and Laos), and Sri Lankan sign languages (14 deaf schools with different sign languages).
  43. ^ Wittmann's references on the subject, besides his own work on creolization and relexification in vocal languages, include papers such as Fischer (1974, 1978), Deuchar (1987) and Judy Kegl's pre-1991 work on creolization in sign languages.
  44. ^ Wittmann's explanation for this is that models of acquisition and transmission for sign languages are not based on any typical parent-child relation model of direct transmission which is inducive to variation and change to a greater extent. He notes that sign creoles are much more common than vocal creoles and that we can't know on how many successive creolizations prototype-A sign languages are based prior to their historicity.
  45. ^ Brentari, Diane (1998): A prosodic model of sign language phonology. Cambridge, MA: MIT Press; cited in Hohenberger (2007) on p. 349
  46. ^ Brentari, Diane (2002): Modality differences in sign language phonology and morphophonemics. In: Richard P. Meier, Kearsy Cormier, and David Quinto-Pozos (eds.), 35-36; cited in Hohenberger (2007) on p. 349
  47. ^ Emmorey, Karen (2002). Language, Cognition, and the Brain. Mahwah, NJ: Lawrence Erlbaum Associates. 
  48. ^ Reilly, Judy (2005). "How Faces Come to Serve Grammar: The Development of Nonmanual Morphology in American Sign Language". In Brenda Schick, Marc Marschack, and Patricia Elizabeth Spencer. Advances in the Sign Language Development of Deaf Children. Cary, NC: Oxford University Press. pp. 262–290. ISBN 9780198039969. 
  49. ^ Hopkins, Jason. 2008. Choosing how to write sign language: a sociolinguistic perspective. International Journal of the Sociology of Language 192:75–90.
  50. ^ dsdj.gallaudet.edu: http://dsdj.gallaudet.edu/assets/section/section2/entry94/DSDJ_entry94.pdf
  51. ^ Kuhl, P. (1991). Human adults and human infants show a ‘perceptual magnet effect’ for the prototypes of speech categories, monkeys do not. Perception and Psychophysics, 50, 93-107.
  52. ^ Morford, J. P., Grieve-Smith, A. B., MacFarlane, J., Staley, J. & Waters, G. S. Effects of language experience on the perception of American Sign Language. Cognition, 109, 41-53, 2008.
  53. ^ Bell Laboratories RECORD (1969) A collection of several articles on the AT&T Picturephone (then about to be released) Bell Laboratories, Pg.134–153 & 160–187, Volume 47, No. 5, May/June 1969;
  54. ^ Susan Goldin-Meadow (Goldin-Meadow 2003, Van Deusen, Goldin-Meadow & Miller 2001) has done extensive work on home sign systems. Adam Kendon (1988) published a seminal study of the homesign system of a deaf Enga woman from the Papua New Guinea highlands, with special emphasis on iconicity.
  55. ^ http://rotc.okstate.edu/pdf/FM%2021-60%20Visual%20Signals.pdf
  56. ^ Hewes (1973), Premack & Premack (1983), Kimura (1993), Newman (2002), Wittmann (1980, 1991)
  57. ^ Kolb & Whishaw (2003)
  58. ^ Premack & Premack (1983), Premack (1985), Wittmann (1991).
